
    Pinocchio-Based Adaptive zk-SNARKs and Secure/Correct Adaptive Function Evaluation

    Pinocchio is a practical zk-SNARK that allows a prover to perform cryptographically verifiable computations, sometimes with verification effort less than performing the computation itself. A recent proposal showed how to make Pinocchio adaptive (or "hash-and-prove"), i.e., to enable proofs with respect to computation-independent commitments. This enables computations to be chosen after the commitments have been produced, and data to be shared flexibly between different computations. Unfortunately, this proposal is not zero-knowledge. In particular, it cannot be combined with Trinocchio, a system in which Pinocchio is outsourced to three workers that do not learn the inputs, thanks to multi-party computation (MPC). In this paper, we show how to make Pinocchio adaptive in a zero-knowledge way; apply it to make Trinocchio work on computation-independent commitments; present tooling to easily program flexible verifiable computations (with or without MPC); and use it to build a prototype in a medical research case study.
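    The core idea of a computation-independent commitment can be illustrated with a minimal hash-based commitment sketch (this is not Pinocchio's Pedersen-style commitment scheme, and the data value is made up): the owner commits to data once, and only afterwards decides which computations will be proven against that commitment.

```python
import hashlib
import secrets

def commit(data: bytes) -> tuple[bytes, bytes]:
    """Commit to data with fresh randomness; returns (commitment, opening)."""
    r = secrets.token_bytes(32)
    c = hashlib.sha256(r + data).digest()
    return c, r

def verify_opening(commitment: bytes, data: bytes, r: bytes) -> bool:
    """Check that (data, r) correctly opens the commitment."""
    return hashlib.sha256(r + data).digest() == commitment

# The data owner commits once; which computation is later proven against
# this commitment can be chosen after the fact.
c, r = commit(b"patient-record-123")
assert verify_opening(c, b"patient-record-123", r)
assert not verify_opening(c, b"patient-record-456", r)
```

    The randomness r hides the data (the commitment alone reveals nothing practical about it), which is the hiding property the zero-knowledge variant of the adaptive construction needs to preserve.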

    Data Minimisation in Communication Protocols: A Formal Analysis Framework and Application to Identity Management

    With the growing amount of personal information exchanged over the Internet, privacy is becoming a growing concern for users. One of the key principles in protecting privacy is data minimisation. This principle requires that only the minimum amount of information necessary to accomplish a certain goal is collected and processed. "Privacy-enhancing" communication protocols have been proposed to guarantee data minimisation in a wide range of applications. However, there is currently no satisfactory way to assess and compare the privacy they offer in a precise way: existing analyses are either too informal and high-level, or specific to one particular system. In this work, we propose a general formal framework to analyse and compare communication protocols with respect to privacy by data minimisation. Privacy requirements are formalised independently of a particular protocol in terms of the knowledge of (coalitions of) actors in a three-layer model of personal information. These requirements are then verified automatically for particular protocols by computing this knowledge from a description of their communication. We validate our framework in an identity management (IdM) case study. As IdM systems are used more and more to satisfy the increasing need for reliable on-line identification and authentication, privacy is becoming an increasingly critical issue. We use our framework to analyse and compare four identity management systems. Finally, we discuss the completeness and (re)usability of the proposed framework.
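    The idea of verifying requirements by computing actor knowledge from a communication description can be sketched with a toy symbolic model (the actor names, flows, and data items below are made up, and the real framework's three-layer model and coalition reasoning are far richer): each message transfers a set of data items, and an actor's knowledge is whatever it starts with plus whatever it receives.

```python
# Toy symbolic knowledge model: flows are (sender, receiver, items).
# The identity provider forwards only a derived, minimised attribute.
protocol = [
    ("user", "idp", {"username", "password"}),
    ("idp", "service", {"age_over_18"}),  # derived, minimised attribute
]

initial = {
    "user": {"username", "password", "birthdate"},
    "idp": set(),
    "service": set(),
}

def knowledge(flows, initial):
    """Compute each actor's final knowledge from the message flows."""
    know = {actor: set(items) for actor, items in initial.items()}
    for _sender, receiver, items in flows:
        know[receiver] |= items
    return know

k = knowledge(protocol, initial)
# Data minimisation holds here if the service learns the derived
# attribute but never the raw birthdate.
assert "age_over_18" in k["service"]
assert "birthdate" not in k["service"]
```

    A requirement such as "the service provider must not learn the user's birthdate" then becomes a simple membership check on the computed knowledge sets.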

    Enabling analytics on sensitive medical data with secure multi-party computation

    While there is a clear need to apply data analytics in the healthcare sector, this is often difficult because it requires combining sensitive data from multiple data sources. In this paper, we show how the cryptographic technique of secure multiparty computation can enable such data analytics by performing analytics without the need to share the underlying data. We discuss the issue of compliance with European privacy legislation; report on three pilots bringing these techniques closer to practice; and discuss the main challenges ahead to make fully privacy-preserving data analytics in the medical sector commonplace.
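    The basic mechanism behind such analytics can be sketched with additive secret sharing, one of the simplest MPC building blocks (the hospital scenario and values below are illustrative, and real deployments add authentication and malicious-security machinery): each input is split into random shares, and the parties can add shares locally so that no single party ever sees any input.

```python
import secrets

P = 2**61 - 1  # public prime modulus for the share arithmetic

def share(x: int, n: int = 3) -> list[int]:
    """Split x into n additive shares that sum to x mod P."""
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % P)
    return shares

def reconstruct(shares: list[int]) -> int:
    return sum(shares) % P

# Three hospitals each secret-share a patient count across three
# computation parties; each party adds the shares it holds locally.
inputs = [120, 75, 310]
all_shares = [share(v) for v in inputs]
sum_shares = [sum(col) % P for col in zip(*all_shares)]
assert reconstruct(sum_shares) == sum(inputs)  # → 505
```

    Each individual share is uniformly random, so any single computation party learns nothing about the hospitals' counts; only the final reconstructed aggregate is revealed.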

    Universally verifiable multiparty computation from threshold homomorphic cryptosystems

    Multiparty computation can be used for privacy-friendly outsourcing of computations on private inputs of multiple parties. A computation is outsourced to several computation parties; if not too many are corrupted (e.g., no more than half), then they cannot determine the inputs or produce an incorrect output. However, in many cases, these guarantees are not enough: we need correctness even if /all/ computation parties may be corrupted, and we need that correctness can be verified even by parties that did not participate in the computation. Protocols satisfying these additional properties are called "universally verifiable". In this paper, we propose a new security model for universally verifiable multiparty computation, and we present a practical construction based on a threshold homomorphic cryptosystem. We also develop a multiparty protocol for jointly producing non-interactive zero-knowledge proofs, which may be of independent interest. Keywords: multiparty computation, verifiability, Fiat-Shamir heuristic, threshold homomorphic cryptosystem.
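    The homomorphic property that such constructions rely on can be illustrated with a toy Paillier instance (the primes below are tiny and purely illustrative; real parameters use moduli of at least 2048 bits, and the paper's threshold variant additionally splits the decryption key among parties): multiplying two ciphertexts yields an encryption of the sum of the plaintexts.

```python
from math import gcd
import secrets

# Toy Paillier keypair (illustrative small primes).
p, q = 293, 433
n, n2 = p * q, (p * q) ** 2
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)  # lcm(p-1, q-1)
g = n + 1

def L(u: int) -> int:
    """Paillier's L function: L(u) = (u - 1) / n."""
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # precomputed decryption constant

def enc(m: int) -> int:
    """Encrypt m with fresh randomness r coprime to n."""
    r = 2 + secrets.randbelow(n - 3)
    while gcd(r, n) != 1:
        r = 2 + secrets.randbelow(n - 3)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c: int) -> int:
    return (L(pow(c, lam, n2)) * mu) % n

# Additive homomorphism: ciphertext multiplication adds the plaintexts.
c = (enc(17) * enc(25)) % n2
assert dec(c) == 42
```

    In the threshold setting, no single party holds lam and mu; instead, each party contributes a partial decryption, and correctness of those partial decryptions is what the zero-knowledge proofs make publicly checkable.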

    Symbolic Privacy Analysis through Linkability and Detectability

    More and more personal information is exchanged on-line using communication protocols. This makes it increasingly important that such protocols satisfy privacy by data minimisation. Formal methods have been used to verify privacy properties of protocols, but so far mostly in an ad-hoc way. In previous work, we provided general definitions for the fundamental privacy concepts of linkability and detectability. However, that approach is only able to verify privacy properties for given protocol instances. In this work, by generalising the approach, we formally analyse privacy of communication protocols independently from any instance. We implement the model; identify its assumptions by relating it to the instantiated model; and show how to visualise results. To demonstrate our approach, we analyse privacy in Identity Mixer.

    Linear programs used for testing

    Linear programs used for the benchmarking results presented in Section 4.
    Hardware/software used: text editor/spreadsheet.
    Data format: tab-separated values (*.teau files); text files (eq-*).
    Source: randomly generated.
    Number of samples: 6.
    Size per sample: 5x5 to 288x20.

    A formal privacy analysis of identity management systems

    With the growing amount of personal information exchanged over the Internet, privacy is becoming a growing concern for users. In particular, personal information is increasingly being exchanged in Identity Management (IdM) systems to satisfy the increasing need for reliable on-line identification and authentication. One of the key principles in protecting privacy is data minimization. This principle states that only the minimum amount of information necessary to accomplish a certain goal should be collected. Several privacy-enhancing IdM systems have been proposed to guarantee data minimization. However, there is currently no satisfactory way to assess and compare the privacy they offer in a precise way: existing analyses are either too informal and high-level, or specific to one particular system. In this work, we propose a general formal method to analyse privacy in systems in which personal information is communicated, and apply it to analyse existing IdM systems. We first elicit privacy requirements for IdM systems through a study of existing systems and taxonomies, and show how these requirements can be verified by expressing knowledge of personal information in a three-layer model. Then, we apply the formal method to study four IdM systems, representative of different research streams, analyse the results in a broad context, and suggest improvements. Finally, we discuss the completeness and (re)usability of the proposed method.

    Secure distributed key generation in attribute based encryption systems

    Nowadays the usage of cloud computing is increasing in popularity, and this raises new data protection challenges. In such distributed systems it is unrealistic to assume that the servers are fully trusted in enforcing the access policies. Attribute Based Encryption (ABE) is one of the solutions proposed to tackle these trust problems. In ABE the data is encrypted under the access policy, and authorized users can decrypt the data only using a secret key that is associated with their attributes. The secret key is generated by a Key Generation Authority (KGA), which in small systems can be constantly audited and therefore fully trusted. In contrast, in large distributed systems, trusting the KGAs is questionable. This paper presents a solution which increases the trust in ABE KGAs. The solution uses several KGAs which issue secret keys only for a limited number of users. One KGA issues a secret key associated with the user's attributes, and the other authorities independently issue secret keys associated with generalized values of the user's attributes. Decryption is possible only if the secret keys associated with the non-generalized and generalized attributes are consistent. This mitigates the risk of unauthorized data disclosure when a couple of authorities are compromised.
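    The consistency idea can be modeled abstractly (this is only a toy model of the check, not real pairing-based ABE, and the attribute names and generalization table are made up): one authority binds a key to exact attributes, another to generalized ones, and the keys are only usable together when the generalization actually covers the exact values.

```python
# Made-up generalization table: exact attribute -> generalized value.
GENERALISE = {"age:34": "age:30-39", "city:Eindhoven": "country:NL"}

def kga_exact(attrs: set[str]) -> frozenset[str]:
    """First authority: key bound to the exact attribute values."""
    return frozenset(attrs)

def kga_general(attrs: set[str]) -> frozenset[str]:
    """Second authority: key bound to generalized attribute values."""
    return frozenset(GENERALISE[a] for a in attrs)

def consistent(exact_key: frozenset, general_key: frozenset) -> bool:
    """Decryption succeeds only if the two keys agree under generalization."""
    return frozenset(GENERALISE[a] for a in exact_key) == general_key

attrs = {"age:34", "city:Eindhoven"}
k1, k2 = kga_exact(attrs), kga_general(attrs)
assert consistent(k1, k2)
# A compromised second authority issuing keys for a different attribute
# set yields an inconsistent pair, so decryption fails:
assert not consistent(k1, kga_general({"age:34"}))
```

    In the actual scheme this check is enforced cryptographically inside decryption rather than by an explicit comparison, so a single compromised authority cannot mint a usable key on its own.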